Boosting the Adversarial Transferability of Surrogate Models with Dark Knowledge
Deep neural networks (DNNs) are vulnerable to adversarial examples. Moreover,
adversarial examples are transferable: an adversarial example crafted for one
DNN model can fool another model with non-trivial probability. This property
gave birth to transfer-based attacks, in which adversarial examples generated
on a surrogate model are used to conduct black-box attacks. There is some work
on generating more transferable adversarial examples from a given surrogate
model. However, training a special surrogate model whose adversarial examples
transfer better remains relatively under-explored. This paper proposes a
method for training a surrogate model
with dark knowledge to boost the transferability of the adversarial examples
generated by the surrogate model. This trained surrogate model is named dark
surrogate model (DSM). The proposed method for training a DSM has two key
components: a teacher model that extracts dark knowledge, and a mixing
augmentation technique that enhances the dark knowledge of the training data.
We conducted
extensive experiments to show that the proposed method can substantially
improve the adversarial transferability of surrogate models across different
architectures of surrogate models and optimizers for generating adversarial
examples, and that it can be applied to other transfer-based attack scenarios
that involve dark knowledge, such as face verification. Our code is publicly
available at \url{https://github.com/ydc123/Dark_Surrogate_Model}.
Comment: Accepted at the 2023 International Conference on Tools with
Artificial Intelligence (ICTAI).
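The two training components described above can be sketched with standard
knowledge-distillation and mixup machinery. This is a minimal illustration
under assumed conventions (temperature-softened KL distillation, Beta-sampled
mixup); the function names and hyperparameters are illustrative, not the
authors' implementation:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # Soften both distributions with temperature T; the teacher's soft
    # probabilities carry the "dark knowledge" (inter-class similarity).
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    # kl_div expects log-probabilities as input; scale by T^2 as is
    # conventional so gradients match the hard-label scale.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

def mixup(x, y_soft, alpha=1.0):
    # Mixing augmentation: convex combination of two examples and of
    # their soft labels, further enriching the dark knowledge.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    idx = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[idx], lam * y_soft + (1 - lam) * y_soft[idx]

# Toy usage: surrogate (student) logits trained against teacher logits.
student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)
loss = distillation_loss(student_logits, teacher_logits)
```

A surrogate trained this way sees smoothed targets rather than one-hot
labels, which is the "dark knowledge" the abstract refers to.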
Self-Supervised Transformer with Domain Adaptive Reconstruction for General Face Forgery Video Detection
Face forgery videos have raised serious public concern, and various detectors
have been proposed recently. However, most of them are trained in a supervised
manner and generalize poorly when detecting videos produced by unseen forgery
methods or drawn from different real source videos. To tackle this issue, we
exploit the difference between real and forged videos by learning only the
common representation of real face videos. In this paper, a Self-supervised
Transformer cooperating with Contrastive and Reconstruction learning (CoReST)
is proposed: it is first pre-trained only on real face videos in a
self-supervised manner, and then a linear head is fine-tuned on specific face
forgery video datasets. Two auxiliary tasks incorporating contrastive and
reconstruction learning are designed to enhance the representation learning.
Furthermore, a Domain Adaptive Reconstruction (DAR) module is introduced to
bridge the gap between forgery domains by reconstructing unlabeled target
videos during fine-tuning. Extensive experiments on public datasets
demonstrate that our proposed method outperforms even state-of-the-art
supervised competitors, with impressive generalization.
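The two auxiliary objectives named above can be sketched with standard
contrastive (InfoNCE-style) and reconstruction losses. This is a hedged
illustration only; the exact CoReST losses, masking scheme, and architecture
are not specified here, and all names are assumptions:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.1):
    # Contrastive objective: two views of the same real clip are the
    # positive pair; all other clips in the batch act as negatives.
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau          # cosine similarities / temperature
    targets = torch.arange(z1.size(0))  # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

def reconstruction_loss(decoded, frames):
    # Reconstruction objective: the encoder learns to rebuild real face
    # frames, so representations specialize to the "real" distribution.
    return F.mse_loss(decoded, frames)

# Toy usage on random embeddings / frames.
c_loss = info_nce(torch.randn(4, 16), torch.randn(4, 16))
r_loss = reconstruction_loss(torch.randn(4, 3, 8, 8), torch.randn(4, 3, 8, 8))
```

The DAR module would apply the same reconstruction term to unlabeled target-
domain videos during fine-tuning, which is what bridges the domain gap.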
Federated PAC-Bayesian Learning on Non-IID data
Existing research has either adapted the Probably Approximately Correct (PAC)
Bayesian framework for federated learning (FL) or used information-theoretic
PAC-Bayesian bounds when proving their theorems, but few works consider the
non-IID challenges of FL. Our work presents the first non-vacuous federated
PAC-Bayesian bound tailored to non-IID local data. This bound assumes unique
prior knowledge for each client and allows variable aggregation weights. We
also introduce an objective function and a novel Gibbs-based algorithm for
optimizing the derived bound. The results are validated on real-world
datasets.
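For context, the classical single-dataset PAC-Bayes bound (in Maurer's form)
that such federated bounds generalize states that, with probability at least
$1-\delta$ over an i.i.d. sample of size $n$, for every posterior $Q$ over
hypotheses and any fixed prior $P$:

```latex
\[
\mathbb{E}_{h\sim Q}\, L(h)
\;\le\;
\mathbb{E}_{h\sim Q}\, \hat{L}(h)
\;+\;
\sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{2n}},
\]
```

where $L$ is the true risk and $\hat{L}$ the empirical risk. The federated
bound described in the abstract extends this setting with a distinct prior per
client and client-dependent aggregation weights; its exact form is not
reproduced here.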
Machine Unlearning Method Based On Projection Residual
Machine learning models (mainly neural networks) are increasingly used in real
life. Users feed their data to the model for training, but these processes are
often one-way: once trained, the model remembers the data, and even when data
are removed from the dataset, their effects persist in the model. With more
and more laws and regulations around the world protecting data privacy, it
becomes ever more important to make models forget such data completely through
machine unlearning.
This paper adopts a projection residual method based on Newton iteration. The
main purpose is to implement machine unlearning in the context of linear
regression models and neural network models. The method uses iterative
reweighting to completely forget the data and their corresponding influence,
and its computational cost is linear in the feature dimension of the data.
This method improves on current machine unlearning approaches and is
independent of the size of the training set. Results were evaluated by feature
injection testing (FIT). Experiments show that this method deletes data more
thoroughly, approaching the effect of retraining the model.
Comment: This paper is accepted by DSAA 2022, the 9th IEEE International
Conference on Data Science and Advanced Analytics.
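For the linear-regression case, a Newton-style unlearning update can be
sketched as follows. Because the least-squares loss is quadratic, a single
Newton step on the remaining data recovers the retrained solution exactly.
Note this sketch solves a full Hessian system, whereas the paper's projection
residual method is stated to cost only linear time in the feature dimension;
the function name and setup are illustrative assumptions, not the authors'
algorithm:

```python
import numpy as np

def unlearn_newton(theta, X, y, idx):
    # Remove sample idx, then take one Newton step on the least-squares
    # loss of the remaining data. The loss is quadratic in theta, so this
    # single step lands exactly on the retrained-from-scratch solution.
    keep = np.ones(len(y), dtype=bool)
    keep[idx] = False
    Xr, yr = X[keep], y[keep]
    H = Xr.T @ Xr                    # Hessian of the remaining loss
    g = Xr.T @ (Xr @ theta - yr)     # gradient at the current theta
    return theta - np.linalg.solve(H, g)
```

An evaluation such as feature injection testing would then check that the
unlearned model no longer carries the deleted sample's influence.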
- …